An error bound in the Sudakov-Fernique inequality

Author

  • Sourav Chatterjee
Abstract

We obtain an asymptotically sharp error bound in the classical Sudakov-Fernique comparison inequality for finite collections of Gaussian random variables. Our proof is short and self-contained, and gives an easy alternative argument for the classical inequality, extended to the case of non-centered processes.

1 Statement of the result

Gaussian comparison inequalities are among the most important tools in the theory of Gaussian processes, and the Sudakov-Fernique inequality (named after Sudakov [11, 12] and Fernique [3]) is perhaps the most widely used member of that class. We concentrate on the Sudakov-Fernique inequality in this article; general discussions of comparison inequalities can be found in Adler [1], Fernique [4], Ledoux & Talagrand [9], and Lifshits [10]. The classical Sudakov-Fernique inequality goes as follows.

Theorem 1.1 (Sudakov-Fernique inequality). Let $\{X_i, i \in I\}$ and $\{Y_i, i \in I\}$ be two centered Gaussian processes indexed by the same index set $I$. Suppose that both processes are almost surely bounded. For each $i, j \in I$, let $\gamma^X_{ij} = E(X_i - X_j)^2$ and $\gamma^Y_{ij} = E(Y_i - Y_j)^2$. If $\gamma^X_{ij} \le \gamma^Y_{ij}$ for all $i, j$, then $E(\sup_{i \in I} X_i) \le E(\sup_{i \in I} Y_i)$.

As mentioned before, this inequality is attributed to Sudakov [11, 12] and Fernique [3]. Later proofs were given by Alexander [2] and in an unpublished work of S. Chevet. Important variants were proved by Gordon [5, 6, 7] and Kahane [8]. More recently, Vitale [14] has shown, through a clever argument, that we only need $E(X_i) = E(Y_i)$ instead of $E(X_i) = E(Y_i) = 0$ in the hypothesis of Theorem 1.1. We will prove the following result, which gives a sharp error bound when the index set is finite, and also contains Vitale's extension of the Sudakov-Fernique inequality.

Theorem 1.2. Let $(X_1, \ldots, X_n)$ and $(Y_1, \ldots, Y_n)$ be Gaussian random vectors with $E(X_i) = E(Y_i)$ for each $i$. For $1 \le i, j \le n$, let $\gamma^X_{ij} = E(X_i - X_j)^2$ and $\gamma^Y_{ij} = E(Y_i - Y_j)^2$, and let $\gamma := \max_{1 \le i, j \le n} |\gamma^X_{ij} - \gamma^Y_{ij}|$. Then
\[
\Bigl| E\bigl(\max_{1 \le i \le n} X_i\bigr) - E\bigl(\max_{1 \le i \le n} Y_i\bigr) \Bigr| \le \sqrt{\gamma \log n}.
\]
Moreover, if $\gamma^X_{ij} \le \gamma^Y_{ij}$ for all $i, j$, then $E(\max_i X_i) \le E(\max_i Y_i)$.

The asymptotic sharpness of the error bound is easy to see from the case where all the $X_i$'s are independent standard normals and all the $Y_i$'s are zero.

2 Proof

We first need to state the following well-known "integration by parts" lemma.

Lemma 2.1. If $F : \mathbb{R}^n \to \mathbb{R}$ is a $C^1$ function of moderate growth at infinity, and $X = (X_1, \ldots, X_n)$ is a centered Gaussian random vector, then for any $1 \le i \le n$,
\[
E\bigl(X_i F(X)\bigr) = \sum_{j=1}^n E(X_i X_j)\, E\Bigl( \frac{\partial F}{\partial x_j}(X) \Bigr).
\]
A proof of this lemma can be found in the appendix of [13], for example.

Proof of Theorem 1.2. Let $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_n)$. Without loss of generality, we may assume that $X$ and $Y$ are defined on the same probability space and are independent. Fix $\beta > 0$, and define $F_\beta : \mathbb{R}^n \to \mathbb{R}$ as
\[
F_\beta(x) := \beta^{-1} \log \Bigl( \sum_{i=1}^n e^{\beta x_i} \Bigr).
\]
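Two standard side computations, stated here for orientation rather than quoted from the paper, may help the reader. First, the soft-max $F_\beta$ defined above sandwiches the maximum via the usual log-sum-exp bounds:
\[
\max_{1 \le i \le n} x_i \;\le\; F_\beta(x) \;\le\; \max_{1 \le i \le n} x_i + \frac{\log n}{\beta},
\]
so $E(\max_i X_i)$ is recovered from $E F_\beta(X)$ by letting $\beta \to \infty$. Second, the sharpness claim can be checked directly: if $X_1, \ldots, X_n$ are independent standard normals and $Y_i \equiv 0$, then
\[
\gamma^X_{ij} = E(X_i - X_j)^2 = 2 \ \ (i \ne j), \qquad \gamma^Y_{ij} = 0, \qquad \gamma = 2,
\]
so Theorem 1.2 gives the bound $\sqrt{2 \log n}$, while $E(\max_{1 \le i \le n} X_i) \sim \sqrt{2 \log n}$ as $n \to \infty$.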


Publication date: 2005